Auctions that reveal only partial information about items are widely used in real-world applications, but the underlying mechanisms have limited theoretical support. In this work, we study a machine learning formulation of such mechanisms, presenting algorithms that are no-regret from the buyer's perspective. Specifically, a buyer who wishes to maximize his utility interacts repeatedly with a platform over a series of $T$ rounds. In each round, a new item is drawn from an unknown distribution, and the platform publishes a price along with incomplete, "masked" information about the item. The buyer then decides whether to purchase the item. We formalize this problem as an online learning task where the goal is to have low regret with respect to a myopic oracle that has perfect knowledge of the distribution over items and the seller's masking function. When the distribution over items is known to the buyer and the mask is a SimHash function mapping $\mathbb{R}^d$ to $\{0,1\}^{\ell}$, our algorithm has regret $\tilde O((Td\ell)^{1/2})$. In a fully agnostic setting, when the mask is an arbitrary function mapping to a set of size $n$ and the prices are stochastic, our algorithm has regret $\tilde O((Tn)^{1/2})$.
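The SimHash mask in the known-distribution setting can be made concrete: each of the $\ell$ output bits records which side of a random hyperplane the item's feature vector falls on. A minimal sketch, assuming the standard SimHash construction with i.i.d. Gaussian hyperplanes (the abstract does not specify the hyperplane distribution):

```python
import numpy as np

def simhash_mask(x, hyperplanes):
    """Map a feature vector x in R^d to an ell-bit mask.

    Each bit is 1 iff x lies on the non-negative side of the
    corresponding random hyperplane (a row of `hyperplanes`).
    """
    return (hyperplanes @ x >= 0).astype(int)

rng = np.random.default_rng(0)
d, ell = 8, 4
H = rng.standard_normal((ell, d))   # one random hyperplane per output bit
item = rng.standard_normal(d)       # a freshly drawn item's features
print(simhash_mask(item, H))        # the ell-bit masked description the buyer sees
```

The buyer never observes `item` directly, only the $\ell$-bit output, which is why the regret bound scales with $\ell$ rather than with the ambient dimension alone.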
translated by 谷歌翻译
Branch-and-cut is the most widely used algorithm for solving integer programs, employed by commercial solvers such as CPLEX and Gurobi. Branch-and-cut has a variety of tunable parameters that have a huge impact on the size of the search tree it builds, but are challenging to tune by hand. An increasingly popular approach is to use machine learning to tune these parameters: using a training set of integer programs from the application domain at hand, the goal is to find a configuration with strong predicted performance on future, unseen integer programs from the same domain. If the training set is too small, a configuration may have good performance on the training set but poor performance on future integer programs. In this paper, we prove sample complexity guarantees for this procedure, which bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cutting plane selection, and are sharper and more general than those found in prior research.
Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is exploited to compute the final segmentation results, and the hidden representations of the deep learned features have not been well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map based on deep features. We measure the segmentation ability of the features by computing the Dice score between the feature segmentation map and the ground truth, termed the segmentation ability score (SA score for short). The SA score can quantify the segmentation abilities of deep features in different layers and units, helping us understand deep neural networks for segmentation. In addition, our method can provide a mean SA score that estimates the performance of the output on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly on input images to further understand the segmentation ability of each input image. Results are presented on segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormality in CT images, prostate segmentation in abdominal MRI, and pancreatic mass segmentation in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available on: \url{https://github.com/shengfly/ProtoSeg}.
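The SA-score pipeline can be sketched as follows. Note that the binarization rule below (thresholding a feature map at its mean) is an illustrative stand-in: ProtoSeg itself derives the binary map from learned feature prototypes, and the Dice comparison against ground truth is the part taken directly from the abstract.

```python
import numpy as np

def dice(seg, gt, eps=1e-8):
    """Dice overlap between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum() + eps)

def sa_score(feature_map, gt):
    """Segmentation-ability score of one feature map: binarize the
    activation (here: mean threshold, a placeholder for ProtoSeg's
    prototype-based rule) and compare to ground truth with Dice."""
    return dice(feature_map >= feature_map.mean(), gt)

# Toy example: a feature map that is a noisy copy of the ground truth
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0
noisy_feature = gt + 0.05 * np.random.default_rng(0).standard_normal(gt.shape)
print(round(sa_score(noisy_feature, gt), 3))
```

A feature map whose activations cleanly separate object from background scores near 1; a feature map unrelated to the target structure scores near 0, which is what makes the score usable as a per-layer and per-unit probe.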
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
The application of natural language processing (NLP) to cancer pathology reports has been focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer and precancerous cases. We achieved a macro-F1 score of 0.914 for classifying patients into negative, non-advanced adenoma, advanced adenoma, and CRC. We further improved the performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrate the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
Blood oxygen level dependent (BOLD) MRI with maternal hyperoxia can assess oxygen transport within the placenta and has emerged as a promising tool to study placental function. Measuring signal changes over time requires segmenting the placenta in each volume of the time series. Due to the large number of volumes in a BOLD time series, existing studies rely on registration to map all volumes to a manually segmented template. As the placenta can undergo large deformations due to fetal motion, maternal motion, and contractions, this approach often results in a large number of discarded volumes where registration fails. In this work, we propose a machine learning model based on a U-Net neural network architecture to automatically segment the placenta in BOLD MRI and apply it to every volume in a time series. We use a boundary-weighted loss function to accurately capture the placental shape. Our model is trained and tested on a cohort of 91 subjects containing healthy fetuses, fetuses with fetal growth restriction, and mothers with high BMI. We achieve a Dice score of 0.83 +/- 0.04 when matching with ground-truth labels, and our model reliably segments volumes at both normoxic and hyperoxic points in the BOLD time series. Our code and trained model are available at https://github.com/mabulnaga/automatic-placenta-segmentation.
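The idea behind a boundary-weighted loss is to upweight pixels near the segmentation boundary so the network is penalized more for shape errors than for errors deep inside or far outside the organ. The sketch below is an illustrative assumption, not the paper's implementation: it uses a BFS distance to the boundary and a U-Net-style exponential weight map on top of binary cross-entropy.

```python
from collections import deque

import numpy as np

def boundary_distance(mask):
    """4-connected BFS distance (in pixels) to the mask boundary."""
    h, w = mask.shape
    dist = np.full((h, w), np.inf)
    q = deque()
    for i in range(h):                     # seed: pixels with a differing neighbor
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] != mask[i, j]:
                    dist[i, j] = 0
                    q.append((i, j))
                    break
    while q:                               # breadth-first propagation
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni, nj] > dist[i, j] + 1:
                dist[ni, nj] = dist[i, j] + 1
                q.append((ni, nj))
    return dist

def boundary_weighted_bce(pred, target, w0=5.0, sigma=2.0, eps=1e-7):
    """Binary cross-entropy with extra weight near the target boundary
    (a sketch; the paper's exact weighting scheme may differ)."""
    w = 1.0 + w0 * np.exp(-boundary_distance(target) ** 2 / (2 * sigma ** 2))
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((w * bce).mean())

mask = np.zeros((6, 6)); mask[2:4, 2:4] = 1.0   # tiny toy "placenta" label
pred = np.full((6, 6), 0.5)                     # a maximally uncertain prediction
print(boundary_weighted_bce(pred, mask))
```

In practice the distance map would be precomputed per label volume, since it depends only on the ground truth and not on the network output.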
Cables are ubiquitous in many settings, but are prone to self-occlusions and knots, making them difficult to perceive and manipulate. The challenge often increases with cable length: long cables require more complex slack management and strategies to facilitate observability and reachability. In this paper, we focus on autonomously untangling cables up to 3 meters in length using a bilateral robot. We develop new motion primitives to efficiently untangle long cables, along with novel gripper jaws specialized for this task. We present Sliding and Grasping for Tangle Manipulation (SGTM), an algorithm that composes these primitives with RGBD vision to iteratively untangle cables. SGTM untangles cables with success rates of 67% on isolated overhand and figure-8 knots and 50% on more complex configurations. Supplementary material, visualizations, and videos can be found at https://sites.google.com/view/rss-2022-untangling/home.
Simulation-to-reality transfer has become a popular and highly successful approach for training robotic control policies for a wide variety of tasks. However, it is often challenging to determine when a policy trained in simulation is ready to be transferred to the physical world. Deploying a policy trained with very little simulation data can result in unreliable and dangerous behavior on physical hardware. On the other hand, excessive training in simulation can cause the policy to overfit to the visual appearance and dynamics of the simulator. In this work, we study strategies to automatically determine when a policy trained in simulation can be reliably transferred to a physical robot. We specifically study these ideas in the context of robotic fabric manipulation, where successful sim2real transfer is particularly challenging due to the difficulty of accurately modeling the dynamics and visual appearance of fabric. Results on a fabric smoothing task show that our switching criteria correlate well with performance in the real world. In particular, our confidence-based switching criteria achieve an average final fabric coverage of 87.2-93.7% within 55-60% of the total training budget. See https://tinyurl.com/lsc-case for code and supplementary materials.
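A confidence-based switching rule of this general flavor can be sketched as a rolling test on a scalar confidence signal: stop simulated training once the signal stays high over a window. The window size, threshold, and the notion of "confidence" below are illustrative placeholders, not the paper's actual criterion.

```python
from collections import deque

class ConfidenceSwitch:
    """Toy sim-to-real switching rule: switch once the rolling mean of the
    policy's confidence (e.g. predicted task success) clears a threshold.
    All hyperparameters here are made-up illustrative values."""

    def __init__(self, threshold=0.9, window=5):
        self.threshold = threshold
        self.scores = deque(maxlen=window)

    def update(self, confidence):
        """Record one training-time confidence reading; return True when
        the window is full and its mean meets the threshold."""
        self.scores.append(confidence)
        full = len(self.scores) == self.scores.maxlen
        return full and sum(self.scores) / len(self.scores) >= self.threshold

sw = ConfidenceSwitch(threshold=0.9, window=5)
sim_confidences = [0.4, 0.6, 0.7, 0.92, 0.93, 0.95, 0.96, 0.97]
for step, c in enumerate(sim_confidences):
    if sw.update(c):
        print(f"switch to real robot at step {step}")
        break
```

Requiring a full window of high readings, rather than a single one, guards against switching on a transient spike early in training.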
The landscape of privacy laws and regulations around the world is complex and ever-changing. National and supranational laws, agreements, decrees, and other government-issued rules form a patchwork that companies must navigate to operate internationally. To examine the status and evolution of this patchwork, we introduce the Government Privacy Instructions Corpus, or GPI Corpus, of 1,043 privacy laws, regulations, and guidelines, covering 182 jurisdictions. This corpus enables large-scale quantitative and qualitative examination of legal focus on privacy. We examine the temporal distribution of when GPIs were created and illustrate the dramatic increase in privacy legislation over the past 50 years, although a finer-grained examination reveals that the rate of increase varies depending on the personal data types that GPIs address. Our exploration also demonstrates that most privacy laws respectively address relatively few personal data types, indicating that comprehensive privacy legislation remains rare. Additionally, topic modeling results show the prevalence of common themes in GPIs, such as finance, healthcare, and telecommunications. Finally, we release the corpus to the research community to promote further study.
Volumetric reconstruction of fetal brains from multiple stacks of MR slices, acquired in the presence of almost unpredictable and often severe subject motion, is a challenging task that is highly sensitive to the initialization of slice-to-volume transformations. We propose a novel slice-to-volume registration method using transformers trained on synthetically transformed data, which model multiple stacks of MR slices as a sequence. With the attention mechanism, our model automatically detects the relevance between slices and predicts the transformation of one slice using information from other slices. We also estimate the underlying 3D volume to assist slice-to-volume registration, and alternately update the volume and transformations to improve accuracy. Results on synthetic data show that our method achieves lower registration error and better reconstruction quality compared with existing state-of-the-art methods. Experiments with real-world MRI data are also performed to demonstrate the ability of the proposed model to improve the quality of 3D reconstruction under severe fetal motion.